6 research outputs found

    Automating the Surveillance of Mosquito Vectors from Trapped Specimens Using Computer Vision Techniques

    Among all animals, mosquitoes are responsible for the most human deaths worldwide. Notably, not all mosquitoes spread diseases; only a select few are competent vectors. In the event of a disease outbreak, an important first step is surveillance of vectors (i.e., those mosquitoes capable of spreading diseases). To do this today, public health workers lay several mosquito traps in the area of interest, and hundreds of mosquitoes get trapped. Among these hundreds, taxonomists must identify only the vectors to gauge their density. This process is currently manual, requires extensive expertise and training, and relies on visual inspection of each trapped specimen under a microscope. It is slow, cognitively taxing, and self-limiting. This paper presents a solution to this problem. Our technique assumes the presence of an embedded camera (similar to those in smart-phones) that can take pictures of trapped mosquitoes; our algorithms then process these images to automatically classify genus and species. Our CNN model, based on Inception-ResNet V2 and transfer learning, yielded an overall accuracy of 80% in classifying mosquitoes when trained on 25,867 images of 250 trapped mosquito vector specimens captured via many smart-phone cameras. In particular, the accuracy of our model in classifying Aedes aegypti and Anopheles stephensi mosquitoes (both of which are deadly vectors) is amongst the highest. We present important lessons learned and the practical impact of our techniques towards the end of the paper.
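
    The paper fine-tunes a pretrained Inception-ResNet V2 via transfer learning. The core idea - reuse a frozen pretrained feature extractor and train only a new classification head - can be sketched without any deep-learning framework. The feature dimension (1536, Inception-ResNet V2's pooled output size) is real, but the data, label count, and training loop below are a toy illustration, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for features from a frozen pretrained backbone.
# Inception-ResNet V2 emits 1536-dimensional vectors after global pooling;
# the sample count and labels here are invented for illustration.
n_samples, n_features, n_classes = 60, 1536, 3
feats = rng.normal(size=(n_samples, n_features))
labels = rng.integers(0, n_classes, size=n_samples)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Transfer-learning step: train only a fresh softmax head; the
# (simulated) backbone weights stay frozen.
W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)
onehot = np.eye(n_classes)[labels]
lr = 0.1
for _ in range(200):
    probs = softmax(feats @ W + b)
    W -= lr * feats.T @ (probs - onehot) / n_samples
    b -= lr * (probs - onehot).mean(axis=0)

train_acc = (softmax(feats @ W + b).argmax(axis=1) == labels).mean()
```

    In the real pipeline, `feats` would come from running each mosquito image through the frozen convolutional layers, and the head would be trained jointly with (or before) fine-tuning the top backbone layers.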

    Automating the Classification of Mosquito Specimens Using Image Processing Techniques

    According to World Health Organization (WHO) reports, among all animals, mosquitoes are responsible for the most human deaths worldwide. Mosquito-borne diseases continue to pose grave dangers to global health. In 2015 alone, 214 million cases of malaria were registered worldwide. According to a Centers for Disease Control and Prevention (CDC) report published in 2016, 62,500 suspected cases of Zika were reported to the Puerto Rico Department of Health (PRDH), of which 29,345 were confirmed positive. The year 2019 was recorded as the worst for dengue in South East Asia. There are close to 4,500 species of mosquitoes (spread across some 34 genera), but only a select few are competent vectors. These vectors primarily belong to three genera: Aedes (Ae.), Anopheles (An.), and Culex (Cu.). Within these genera, multiple species are responsible for transmitting particular diseases. Malaria is spread primarily by An. gambiae in Africa and by An. stephensi in India. Dengue, yellow fever, chikungunya, and Zika fever are spread primarily by Ae. aegypti. Cu. nigripalpus is a vector for West Nile and other encephalitis viruses. Since not all mosquitoes spread diseases, in the event of a disease outbreak an important first step is surveillance of vectors (i.e., those mosquitoes capable of spreading diseases). To do this today, public health workers lay several mosquito traps in the area of interest, and hundreds of mosquitoes get trapped. Among these hundreds, taxonomists must identify only the vectors to gauge their density. Unfortunately, species identification is still visual today: a laborious and cognitively stressful process in which trained personnel spend significant hours each day examining each specimen under a microscope for accurate identification and recording.
In this dissertation, we first explored the feasibility of an AI-enabled, smart-phone based system that identifies mosquito species using an image-based classification algorithm. We trained this algorithm on 303 images spread across 9 mosquito species; it served as a proof of concept that common citizens could use our technique to identify species in their own homes, providing significant benefits to both residents and mosquito control programs. Our system integrates image processing, feature selection, unsupervised clustering, and a support vector machine (SVM) based machine learning algorithm for species classification. The overall accuracy of our system across all 9 species is 77.5%. Encouraged by these preliminary results, we collected a more diverse set of mosquito images, taken with a diverse array of smartphones and in multiple backgrounds and orientations. With this larger-scale dataset, we designed deep learning architectures for three classes of problems: a) identifying genus alone; b) identifying species given knowledge of the genus; and c) directly identifying the species. We also contrast the performance of each architecture and provide contextual relevance to the ensuing results. Our CNN model based on Inception-ResNet V2 and transfer learning yielded an overall accuracy of 80% in classifying mosquitoes when trained on 25,867 images of 250 trapped mosquito vector specimens captured via many smart-phone cameras. In particular, the accuracy of our model in classifying Aedes aegypti and Anopheles stephensi mosquitoes (both of which are deadly vectors) is amongst the highest.
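
Architecture b) - identifying species given knowledge of the genus - implies chaining the two model outputs by the rule P(species) = P(genus) x P(species | genus). A minimal sketch with hypothetical probabilities (the species lists and numbers below are illustrative only, not model outputs):

```python
# Hypothetical softmax outputs: P(genus) from the genus model, and
# P(species | genus) from per-genus species models.
p_genus = {"Aedes": 0.7, "Anopheles": 0.2, "Culex": 0.1}
p_species_given_genus = {
    "Aedes": {"Ae. aegypti": 0.8, "Ae. albopictus": 0.2},
    "Anopheles": {"An. stephensi": 0.6, "An. gambiae": 0.4},
    "Culex": {"Cu. nigripalpus": 1.0},
}

# Chain rule: P(species) = P(genus) * P(species | genus).
p_species = {}
for genus, pg in p_genus.items():
    for species, ps in p_species_given_genus[genus].items():
        p_species[species] = pg * ps

best = max(p_species, key=p_species.get)
# best == "Ae. aegypti" with probability 0.7 * 0.8 = 0.56
```

Architecture c), by contrast, predicts the species in a single flat softmax over all species, with no intermediate genus decision.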
Next, to remove the effect of background noise and to focus entirely on mosquito anatomy, we designed a framework based on state-of-the-art Mask Region-based Convolutional Neural Networks (Mask R-CNN) to automatically detect and separately extract mosquito anatomies - thorax, wings, abdomen, and legs - from mosquito images. For this framework, we prepared a training dataset consisting of 1,500 smartphone images, annotated with anatomy masks, across nine mosquito specimens trapped in Florida. In this framework, we first separate objects of interest (foreground) from the background of the image. We then segment the pixels containing anatomical components in the foreground by adding a branch to mask (i.e., extract the pixels of) each component, and in parallel add two more branches to localize and classify the extracted anatomical components. The mask mAP at IoU thresholds of 0.3, 0.5, and 0.7 is 0.625, 0.600, and 0.510 on the validation dataset, and 0.535, 0.524, and 0.412 on the testing dataset. Further, we conducted a feasibility study of anatomy-based (thorax, wing, abdomen, and leg) classification to improve prediction accuracy for three genus categories - Aedes, Anopheles, and Culex - from smartphone images. Very low inter-class variance among these genera and low-quality images make this problem challenging. To overcome this, we employed a bilinear CNN architecture for our neural network model, which works best in this scenario. We extracted the four anatomies from each mosquito image and trained an independent model on each anatomy for genus classification. We also ensembled these models to compute aggregated results.
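
The mAP figures above are computed by matching predicted masks to ground-truth masks at IoU thresholds of 0.3, 0.5, and 0.7, where the IoU of two binary masks is the ratio of their pixel-wise intersection to their union. A toy illustration (the masks below are hypothetical, not from the dataset):

```python
import numpy as np

def mask_iou(a, b):
    """IoU of two boolean masks of the same shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

# Toy example: predicted vs. ground-truth thorax masks on an 8x8 grid.
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True           # 16 ground-truth pixels
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 3:7] = True         # 16 predicted pixels, 3x3 = 9 overlapping

iou = mask_iou(pred, gt)      # 9 / (16 + 16 - 9) = 9/23, about 0.391
matched_at = [t for t in (0.3, 0.5, 0.7) if iou >= t]  # counts only at 0.3
```

A prediction this rough counts as a true positive at the 0.3 threshold but as a false positive at 0.5 and 0.7, which is why the reported mAP falls as the threshold tightens.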
Our ensemble model and the four independent anatomy-based models (thorax, abdomen, wing, and leg) achieved accuracies of 91%, 87.33%, 81%, 75.80%, and 68.02%, respectively, on test data.
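
The exact aggregation rule for the ensemble is not detailed here; a common choice, assumed purely for illustration, is soft voting - averaging the per-anatomy genus probabilities and taking the argmax (the numbers below are invented):

```python
import numpy as np

# Hypothetical per-anatomy genus probabilities for one specimen,
# ordered (Aedes, Anopheles, Culex).
per_anatomy = {
    "thorax":  [0.70, 0.20, 0.10],
    "abdomen": [0.55, 0.30, 0.15],
    "wing":    [0.40, 0.45, 0.15],
    "leg":     [0.34, 0.33, 0.33],
}

# Soft voting: average the probability vectors, then take the argmax.
avg = np.mean(list(per_anatomy.values()), axis=0)
genera = ["Aedes", "Anopheles", "Culex"]
prediction = genera[int(avg.argmax())]
```

Note how the thorax model (the strongest individual model above) can be overruled or reinforced by the weaker anatomy models; that averaging effect is what lets the ensemble outperform each individual model.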

    A Machine Learning Framework to Classify Mosquito Species from Smart-phone Images

    Mosquito-borne diseases have been a constant scourge across the globe, with debilitating consequences and death. To derive trends in the mosquito population of an area, trained personnel lay traps and, after collecting the trapped specimens, spend hours under a microscope inspecting each specimen to identify the actual species and log it. This is vital because multiple species of mosquitoes can reside in any area, and the pathogens that some of them carry are not the same ones carried by others. The species identification process is naturally laborious and imposes a severe cognitive burden, since hundreds of mosquitoes can sometimes get trapped. Most importantly, common citizens cannot aid in this task. In this paper, we design a system for mosquito species identification from smart-phone images that integrates image processing, feature selection, unsupervised clustering, and a support vector machine based classification algorithm. Results with a total of 101 female mosquito specimens spread across 9 different vector-carrying species (captured from a real outdoor trap) demonstrate an overall accuracy of 77% in species identification. When implemented as a smart-phone app, the latency and energy consumption were minimal. In terms of practical impact, common citizens can use our system to identify mosquito species themselves and share images with local and global mosquito control centers. In economically disadvantaged areas across the globe, tools like these can enable novel citizen-science mechanisms to combat the spread of mosquitoes.
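
    The final classification stage is SVM-based. As an illustrative stand-in (not the paper's implementation), a linear SVM can be trained with Pegasos-style stochastic sub-gradient descent; the two-dimensional features below are toy stand-ins for the image features:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-D stand-ins for image features of two species; labels in {-1, +1}.
X = np.vstack([rng.normal(-2.0, 0.5, size=(50, 2)),
               rng.normal(+2.0, 0.5, size=(50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

# Pegasos-style sub-gradient training of a linear SVM
# (bias omitted; the toy classes are centered symmetrically).
lam = 0.01
w = np.zeros(2)
for t in range(1, 2001):
    i = rng.integers(len(X))
    eta = 1.0 / (lam * t)                  # decaying step size
    if y[i] * (X[i] @ w) < 1:              # hinge-loss violation
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
    else:
        w = (1 - eta * lam) * w            # only the regularizer acts

accuracy = (np.sign(X @ w) == y).mean()
```

    In practice one would use a tuned library SVM with a kernel; the sketch only shows the hinge-loss/max-margin objective the classifier stage optimizes.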

    Leveraging smart-phone cameras and image processing techniques to classify mosquito genus and species

    Our technique for identifying insect species integrates image processing, feature selection, unsupervised clustering, and a support vector machine (SVM) learning algorithm for classification. Results with a total of 101 mosquito specimens spread across nine different vector-carrying species demonstrate high accuracy in species identification. When implemented as a smart-phone application, the latency and energy consumption were minimal. The currently manual process of species identification and recording can be sped up, while also minimizing the ensuing cognitive workload on personnel. Citizens at large can use the system in their own homes for self-awareness and share insect identification data with public health agencies.

    Perception of college-going girls towards corneal donation in North India: A latent class analysis study

    Purpose: To assess the perception of college-going girls toward corneal donation in Northern India. Methods: An online survey with a pre-structured, pre-validated questionnaire was conducted on 1,721 college-going girls in Northern India. The knowledge and attitude scores were regressed, and a latent class analysis was carried out. Results: An average score was computed for each participant, separately for the knowledge questions and the attitude questions, and based on these scores the participants were divided into two groups: better corneal donation behaviors (BCDB) and poor corneal donation behaviors. In the binomial logistic regression model of the knowledge domain for predicting BCDB, the participant's age, awareness about corneal donation, and willingness to discuss eye donation among family members were found to be significant. Similarly, for the attitude domain, awareness about corneal donation, knowledge of the hours within which eye donation should ideally be undertaken, and knowledge about eye donation during the coronavirus disease 2019 (COVID-19) pandemic were significant. Latent class analysis identified one subset of participants with poorer knowledge and attitude scores; these participants were more often from rural backgrounds, had a birth order higher than first, belonged to SC/ST classes, had fathers and mothers who were illiterate or had only secondary education, and lived in rented houses. Conclusion: The findings of the study contribute significantly to devising mechanisms to improve knowledge of, and influence attitudes toward, eye donation among the youth, especially young women, who can act as counselors and motivators for the masses as well as their own families in the generations to come.
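
    Latent class analysis assigns respondents to unobserved classes by fitting a mixture of independent Bernoulli distributions over their categorical answers, typically via expectation-maximization. A minimal two-class sketch on simulated binary responses (the data, item count, and probabilities below are invented for illustration, not the study's):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated binary questionnaire responses (1 = "better" answer) for
# two latent classes with different item-endorsement probabilities.
true_item_p = np.array([[0.9, 0.8, 0.85, 0.7],   # class 0: better behaviors
                        [0.3, 0.2, 0.25, 0.4]])  # class 1: poorer behaviors
z = rng.integers(0, 2, size=500)                 # hidden class memberships
X = (rng.random((500, 4)) < true_item_p[z]).astype(float)

# EM for a 2-class latent class model.
pi = np.array([0.5, 0.5])                        # class weights
p = rng.uniform(0.25, 0.75, size=(2, 4))         # item probs per class
for _ in range(100):
    # E-step: responsibilities from independent-Bernoulli likelihoods.
    like = np.prod(p[None] ** X[:, None] *
                   (1 - p[None]) ** (1 - X[:, None]), axis=2)
    resp = like * pi
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate class weights and item probabilities.
    pi = resp.mean(axis=0)
    p = (resp.T @ X) / resp.sum(axis=0)[:, None]
```

    After fitting, each respondent is assigned to the class with the larger responsibility, which is how the analysis surfaces a subset with systematically poorer scores.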